General inertial proximal stochastic variance reduction gradient for nonconvex nonsmooth optimization
Abstract
In this paper, motivated by the competitive performance of the proximal stochastic variance reduction gradient (Prox-SVRG) method, a novel general inertial Prox-SVRG (GIProx-SVRG) algorithm is proposed for solving a class of nonconvex finite-sum problems. More precisely, an extrapolation acceleration step based on Nesterov's momentum trick is incorporated into the framework of the Prox-SVRG method. The GIProx-SVRG possesses a more general inertial expression and thus can potentially achieve an accelerated convergence speed. Moreover, based on supermartingale convergence theory and an error bound condition, we establish a linear convergence rate for the iterate sequence generated by the GIProx-SVRG algorithm. To our knowledge, no existing Prox-SVRG method employs an inertial technique of this generality, whereas such a technique is developed in this paper. Experimental results demonstrate the superiority of our method over state-of-the-art methods.
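As a rough illustration of the update scheme the abstract describes, the sketch below combines an SVRG-type variance-reduced gradient estimator with a Nesterov-style extrapolation step and a proximal step. It is a minimal sketch, not the paper's algorithm: the step size eta, momentum weight beta, the L1 regularizer (with its soft-thresholding prox), and the helper grad_i are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (an illustrative choice of the
    # nonsmooth term g; the paper does not fix a particular g here).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def inertial_prox_svrg(grad_i, n, x0, eta=0.1, beta=0.5, lam=0.01,
                       outer=20, inner=100, seed=0):
    """Sketch of an inertial Prox-SVRG loop.

    grad_i(i, x): gradient of the i-th smooth component f_i at x
    (an assumed helper supplied by the caller).
    """
    rng = np.random.default_rng(seed)
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(outer):
        x_snap = x.copy()
        # Full gradient at the snapshot point, reused by the estimator below.
        full_grad = np.mean([grad_i(i, x_snap) for i in range(n)], axis=0)
        for _ in range(inner):
            y = x + beta * (x - x_prev)  # inertial extrapolation
            i = rng.integers(n)
            # SVRG-type variance-reduced gradient estimator.
            v = grad_i(i, y) - grad_i(i, x_snap) + full_grad
            # Proximal gradient step from the extrapolated point.
            x_prev, x = x, soft_threshold(y - eta * v, eta * lam)
    return x
```

The single fixed momentum weight beta is a simplification; the "general inertial expression" the abstract refers to allows a richer extrapolation scheme than this.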
Similar resources
A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems. In particular, the objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. The algorithm is ...
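Both the paper above and ProxSVRG+ target the same composite finite-sum model; in standard notation (assumed here, not quoted from either paper) it reads

$$\min_{x \in \mathbb{R}^d} \; F(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x) + g(x),$$

where each $f_i$ is smooth and possibly nonconvex and $g$ is convex but possibly nonsmooth. The nonsmooth term is handled through its proximal operator, $\operatorname{prox}_{\eta g}(z) = \arg\min_{x} \{\, g(x) + \tfrac{1}{2\eta}\|x - z\|^2 \,\}$.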
Stochastic Variance Reduction for Nonconvex Optimization
We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (Svrg) methods for them. Svrg and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (Sgd); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary...
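The estimator at the heart of SVRG-type methods, written in standard notation (assumed) with a snapshot point $\tilde{x}$ and the full gradient $\nabla F(\tilde{x})$ computed once per epoch:

$$v_t = \nabla f_{i_t}(x_t) - \nabla f_{i_t}(\tilde{x}) + \nabla F(\tilde{x}), \qquad \mathbb{E}_{i_t}[v_t] = \nabla F(x_t).$$

The estimator is unbiased, and its variance vanishes as $x_t$ and $\tilde{x}$ approach a common point, which is what permits the constant step sizes that plain SGD cannot sustain.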
Asynchronous Stochastic Proximal Methods for Nonconvex Nonsmooth Optimization
We study stochastic algorithms for solving non-convex optimization problems with a convex yet possibly non-smooth regularizer, which find wide applications in many practical machine learning applications. However, compared to asynchronous parallel stochastic gradient descent (AsynSGD), an algorithm targeting smooth optimization, the understanding of the behavior of stochastic algorithms for the n...
Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization
We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast st...
Variance Reduction for Stochastic Gradient Optimization
Stochastic gradient optimization is a class of widely used algorithms for training machine learning models. To optimize an objective, it uses the noisy gradient computed from the random data samples instead of the true gradient computed from the entire dataset. However, when the variance of the noisy gradient is large, the algorithm might spend much time bouncing around, leading to slower conve...
Journal
Journal title: Journal of Inequalities and Applications
Year: 2023
ISSN: 1025-5834, 1029-242X
DOI: https://doi.org/10.1186/s13660-023-02922-4